Technology term recognition with comprehensive constituency parsing
Junjie ZHU, Li YU, Shengwen LI, Changzheng ZHOU
Journal of Computer Applications    2024, 44 (4): 1072-1079.   DOI: 10.11772/j.issn.1001-9081.2023040532

Technology terms are used to communicate information accurately in the field of science and technology. Automatically recognizing technology terms from text can help experts and the public discover, recognize, and apply new technologies, which is of great value, but unsupervised technology term recognition methods still suffer from limitations such as complex rules and poor adaptability. To enhance the ability to recognize technology terms from text, an unsupervised technology term recognition method was proposed. Firstly, a syntactic structure tree was constructed through constituency parsing. Then, candidate technology terms were extracted from both top-down and bottom-up perspectives. Finally, statistical frequency and semantic information were combined to determine the most appropriate technology terms. In addition, a technology term dataset was constructed to validate the effectiveness of the proposed method. Experimental results on this dataset show that the proposed method with top-down extraction improves the F1 score by 4.55 percentage points compared with the dependency-based method. Meanwhile, a case study in the field of 3D printing shows that the technology terms recognized by the proposed method are in line with the development of the field and can be used to trace the development process of technology and depict its evolution path, providing references for understanding, discovering, and exploring future technologies in the field.
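
As a rough illustration of the two extraction perspectives, the sketch below collects maximal (top-down) and minimal (bottom-up) noun phrases from a constituency tree as candidate terms. The NP-based candidate rule, the example sentence, and the omission of the frequency/semantic ranking stage are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: extracting candidate technology terms from a
# constituency parse, assuming noun phrases (NP) are the candidate unit.
from nltk import Tree

def candidates_top_down(tree):
    """Collect maximal NP subtrees: stop descending once an NP is found."""
    if tree.label() == "NP":
        return [" ".join(tree.leaves())]
    cands = []
    for child in tree:
        if isinstance(child, Tree):
            cands.extend(candidates_top_down(child))
    return cands

def candidates_bottom_up(tree):
    """Collect minimal NP subtrees: NPs that contain no nested NP."""
    cands = []
    for subtree in tree.subtrees(lambda t: t.label() == "NP"):
        has_inner_np = any(
            isinstance(c, Tree) and any(s.label() == "NP" for s in c.subtrees())
            for c in subtree
        )
        if not has_inner_np:
            cands.append(" ".join(subtree.leaves()))
    return cands

parse = Tree.fromstring(
    "(S (NP (NP (JJ additive) (NN manufacturing)) (PP (IN of) "
    "(NP (JJ metallic) (NNS parts)))) (VP (VBZ matures)))")
print(candidates_top_down(parse))   # maximal NP: the whole complex phrase
print(candidates_bottom_up(parse))  # minimal NPs: the innermost phrases
```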

Parallel design and implementation of synthetic view distortion change algorithm in reconfigurable structure
JIANG Lin, SHI Jiaqi, LI Yuancheng
Journal of Computer Applications    2021, 41 (6): 1734-1740.   DOI: 10.11772/j.issn.1001-9081.2020091462
Focusing on the high computational time complexity of the depth-map-based Synthesized View Distortion Change (SVDC) algorithm in 3D High Efficiency Video Coding (3D-HEVC), a hybrid-granularity parallelization method for the SVDC algorithm was proposed under a reconfigurable array structure. Firstly, the SVDC algorithm was divided into two parts: Virtual View Synthesis (VVS) and distortion value calculation. Secondly, the VVS part was accelerated by pipelining, and the distortion value calculation part was accelerated at two levels: the task level, by dividing the synthesized image according to pixels, and the instruction level, by dividing the distortion calculation within each pixel according to the computation process. Finally, a reconfigurable mechanism was used to parallelize the VVS and distortion value calculation parts. Theoretical analysis and hardware simulation results show that, in terms of execution time, the proposed method achieves a speedup of 2.11 with 4 Processing Elements (PEs). Compared with SVDC implementations based on the Low Level Virtual Machine (LLVM) and Open Multi-Processing (OpenMP), the proposed method reduces the calculation time by 18.56% and 21.93% respectively. The proposed method can thus exploit the parallelism of the SVDC algorithm and effectively shorten its execution time by leveraging the characteristics of the reconfigurable array structure.
Fast algorithm for distance regularized level set evolution model
YUAN Quan, WANG Yan, LI Yuxian
Journal of Computer Applications    2020, 40 (9): 2743-2747.   DOI: 10.11772/j.issn.1001-9081.2020010106
The gradient descent method has poor convergence and is sensitive to local minima. Therefore, an improved NAG (Nesterov's Accelerated Gradient) algorithm was proposed to replace the gradient descent algorithm in the Distance Regularized Level Set Evolution (DRLSE) model, yielding a fast image segmentation algorithm based on NAG. Firstly, the initial level set evolution equation was given. Secondly, the gradient was calculated using the NAG algorithm. Finally, the level set function was updated continuously, preventing it from falling into a local minimum. Experimental results show that, compared with the original algorithm in the DRLSE model, the proposed algorithm reduces the number of iterations by about 30% and the CPU running time by more than 30%. The algorithm is simple to implement and can be applied to segment images with strict real-time requirements, such as infrared and medical images.
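
For intuition, a minimal sketch of the NAG update inside a level set evolution loop follows. The `grad_energy` function is a stand-in placeholder (a simple smoothing term), since the abstract does not spell out the DRLSE gradient flow.

```python
# Minimal sketch of Nesterov's Accelerated Gradient (NAG) replacing plain
# gradient descent in a level set evolution loop.
import numpy as np

def evolve_nag(phi, grad_energy, lr=0.1, momentum=0.9, n_iter=200):
    v = np.zeros_like(phi)
    for _ in range(n_iter):
        lookahead = phi + momentum * v        # evaluate gradient ahead
        v = momentum * v - lr * grad_energy(lookahead)
        phi = phi + v                         # update the level set
    return phi

# Toy gradient: a mean-curvature-like smoothing term only (illustrative,
# not the DRLSE energy).
def grad_energy(phi):
    gy, gx = np.gradient(phi)
    gyy, _ = np.gradient(gy)
    _, gxx = np.gradient(gx)
    return -(gxx + gyy)                       # -Laplacian drives smoothing

phi0 = np.random.rand(64, 64) - 0.5          # random initial level set
phi = evolve_nag(phi0, grad_energy)
```
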
Bionic image enhancement algorithm based on top-bottom hat transformation
YU Tianhe, LI Yuzuo, LAN Chaofeng
Journal of Computer Applications    2020, 40 (5): 1440-1445.   DOI: 10.11772/j.issn.1001-9081.2019081496

In view of the low contrast, poor details, and low color saturation of low-illumination images, a bionic image enhancement algorithm combining top-hat and bottom-hat transformations was proposed by analyzing the non-linear relationship between the subjective brightness sensation of the human eye and the transmission characteristics of the receptive fields of retinal ganglion cells. Firstly, the RGB (Red, Green, Blue) color space of the low-illumination image was converted into HSV (Hue, Saturation, Value) space, and a global logarithmic transformation was performed on the brightness component. Secondly, a tri-Gaussian model of the retinal neuron receptive field was used to adjust the contrast of local edges in the image. Finally, top-hat and bottom-hat transformations were used to assist the extraction of the high-brightness background. The experimental results show that low-illumination images enhanced by the proposed algorithm have clear details and high contrast, without the uneven illumination and depth-of-field problems of images captured by the device. The enhanced images have high color saturation and a strong visual effect.
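
A hedged sketch of the morphological stage with OpenCV is given below; the log-transform scaling, the structuring element size, and the way the top-hat and bottom-hat results are combined are illustrative choices, and the tri-Gaussian receptive field step is omitted.

```python
# Sketch: HSV conversion, global log transform on V, then top-hat /
# bottom-hat to lift bright detail and suppress dark background.
import cv2
import numpy as np

img = cv2.imread("low_light.jpg")                    # hypothetical input
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

v = v.astype(np.float32)
v_log = 255 * np.log1p(v) / np.log(256.0)            # global log transform
v8 = v_log.astype(np.uint8)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
tophat = cv2.morphologyEx(v8, cv2.MORPH_TOPHAT, kernel)
bothat = cv2.morphologyEx(v8, cv2.MORPH_BLACKHAT, kernel)

# Classic top-hat contrast enhancement: add bright detail, subtract dark.
v_enh = cv2.add(cv2.subtract(v8, bothat), tophat)

out = cv2.cvtColor(cv2.merge([h, s, v_enh]), cv2.COLOR_HSV2BGR)
cv2.imwrite("enhanced.jpg", out)
```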

High dynamic range imaging algorithm based on luminance partition fuzzy fusion
LIU Ying, WANG Fengwei, LIU Weihua, AI Da, LI Yun, YANG Fanchao
Journal of Computer Applications    2020, 40 (1): 233-238.   DOI: 10.11772/j.issn.1001-9081.2019061032
To solve the color distortion and loss of local detail caused by histogram expansion when a High Dynamic Range (HDR) image is generated from a single image, an HDR imaging algorithm based on luminance partition fusion was proposed. Firstly, the luminance component of a normally exposed color image was extracted, and the luminance was divided into two intervals according to a luminance threshold. Then, the luminance ranges of the two intervals were extended by an improved exponential function, so that the luminance of the low-luminance area was increased, the luminance of the high-luminance area was decreased, and both ranges were expanded, increasing the overall contrast of the image while preserving color and detail information. Finally, the extended image and the original normally exposed image were fused into a high dynamic range image based on fuzzy logic. The proposed algorithm was analyzed both subjectively and objectively. The experimental results show that it can effectively expand the luminance range of the image while keeping the color and detail information of the scene, and the generated image has a better visual effect.
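
The sketch below illustrates the partition-stretch-fuse idea on a normalized luminance channel. The stretching function and fuzzy membership used here are placeholder forms, since the abstract does not specify the paper's exact exponential function or fuzzy rules.

```python
# Illustrative sketch of luminance partitioning with a placeholder
# stretch, plus a simple fuzzy-weight fusion with the original channel.
import numpy as np

def partition_stretch(Y, T=0.5, gamma=2.0):
    low = Y < T
    Y_ext = np.empty_like(Y)
    # Brighten the low-luminance interval, darken the high one.
    Y_ext[low] = T * (Y[low] / T) ** (1.0 / gamma)
    Y_ext[~low] = T + (1 - T) * ((Y[~low] - T) / (1 - T)) ** gamma
    return Y_ext

def fuzzy_fuse(Y_orig, Y_ext):
    # Membership grows with distance from mid-gray: trust the stretched
    # image more in very dark/bright regions, the original near mid-tones.
    w = np.abs(Y_orig - 0.5) * 2.0
    return w * Y_ext + (1 - w) * Y_orig

Y = np.clip(np.random.rand(4, 4), 0, 1)   # normalized luminance channel
fused = fuzzy_fuse(Y, partition_stretch(Y))
```
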
Adaptive hierarchical searchable encryption scheme based on learning with errors
ZHANG En, HOU Yingying, LI Gongli, LI Huimin, LI Yu
Journal of Computer Applications    2020, 40 (1): 148-156.   DOI: 10.11772/j.issn.1001-9081.2019060961
To solve the problem that existing hierarchical searchable encryption schemes cannot effectively resist quantum attacks or flexibly add and delete levels, an Adaptive Hierarchical Searchable Encryption scheme based on learning with errors (AHSE) was proposed. Firstly, the scheme was made resistant to quantum attacks by exploiting the multidimensional structure of lattices and basing it on the Learning With Errors (LWE) problem on lattices. Secondly, a condition key was constructed to divide users into distinct levels, so that a user can only search files at his or her own level, achieving effective hierarchical access control. At the same time, a segmented index structure with good adaptability was designed, whose levels can be added and deleted flexibly, meeting access control requirements of different granularities. Moreover, all users can search by sharing a single segmented index table, which effectively improves search efficiency. Finally, theoretical analysis shows that updating, deleting, and changing the level of users and files in this scheme are simple operations, making it suitable for dynamic encrypted databases, cloud medical systems, and other dynamic environments.
Efficient genetic comparison scheme for user privacy protection
LI Gongli, LI Yu, ZHANG En, YIN Tianyu
Journal of Computer Applications    2020, 40 (1): 136-142.   DOI: 10.11772/j.issn.1001-9081.2019061080
Concerning the problem that current genetic comparison protocols generally require a trusted third party, which may result in the leakage of a wide range of private data, a genetic comparison scheme based on linear scan was proposed. The gene sequences of the two parties were first encoded based on a Garbled Circuit (GC); the genome database was then scanned linearly, and the garbled circuit was used to compare the user's gene sequence with every gene sequence in the database. This scheme achieves genetic comparison while protecting the privacy of both parties. However, it needs to scan the whole database with time complexity O(n) and is inefficient when the genome database is large. To improve efficiency, a genetic comparison scheme based on Oblivious Random Access Memory (ORAM) was further proposed, in which genetic data was first stored in ORAM, and only the data blocks on the target path were picked out for comparison by the garbled circuit. The number of comparisons in this scheme is sub-linear in the size of the database, with time complexity O(log n). The experimental results show that the ORAM-based scheme reduces the number of comparisons from O(n) to O(log n) while preserving privacy, significantly decreasing the time complexity of the comparison operation. It can be used for disease diagnosis, especially with large genome databases.
Dual-antenna attitude determination algorithm based on low-cost receiver
WANG Shouhua, LI Yunke, SUN Xiyan, JI Yuanfa
Journal of Computer Applications    2019, 39 (8): 2381-2385.   DOI: 10.11772/j.issn.1001-9081.2018122554
Concerning the problem that a low-cost Dual-antenna Attitude determination System (DAS) has low accuracy and gross errors because of its direct solution, an improved algorithm based on a carrier phase and pseudo-range double-difference Real-Time Kinematic (RTK) Kalman filter was proposed. Firstly, the baseline length was employed as an observation, and the precise baseline length obtained in advance was taken as the observation error. Secondly, the position of the master antenna was corrected according to the epoch time interval of the slave antenna receiver, and the integer ambiguity was resolved by the MLAMBDA (Modified LAMBDA) algorithm. Experimental results in static and dynamic modes show that, with a 1.1 m baseline and combined GPS and BeiDou systems, the heading angle accuracy of the proposed algorithm is about 1 degree and the pitch angle accuracy is about 2-3 degrees. The proposed algorithm greatly improves the robustness and accuracy of the system compared with traditional dual-antenna attitude determination by direct solution.
Microoperation-based parameter auto-optimization method of Hadoop
LI Yunshu, TENG Fei, LI Tianrui
Journal of Computer Applications    2019, 39 (6): 1589-1594.   DOI: 10.11772/j.issn.1001-9081.2018122592
As a large-scale distributed data processing framework, Hadoop has been widely used in industry in recent years. Manual and experience-based parameter optimization are currently ineffective because of the complex running process and the large parameter space. To solve this problem, a method and an analytical framework for automatic Hadoop parameter optimization were proposed. Firstly, the operation process of a job was broken down into several microoperations, identified at a finer granularity so that each is directly affected by tunable parameters, which made it possible to analyze the relationship between parameters and the execution time of a single microoperation. Then, by reconstructing the job operation process from these microoperations, a model of the relationship between parameters and the execution time of the whole job was established. Finally, various search optimization algorithms were applied to this model to efficiently and quickly obtain optimized system parameters. Experiments were conducted with two types of jobs, terasort and wordcount. The experimental results show that, compared with the default parameter configuration, the proposed method reduces job execution time by at least 41% and 30% respectively. The proposed method can effectively improve the job execution efficiency of Hadoop and shorten job execution time.
HSWAP: numerical simulation workflow management platform suitable for high performance computing environment
ZHAO Shicao, XIAO Yonghao, DUAN Bowen, LI Yufeng
Journal of Computer Applications    2019, 39 (6): 1569-1576.   DOI: 10.11772/j.issn.1001-9081.2018122606
Concerning the construction of integrated "modeling, computation, analysis, optimization" workflow applications in a High Performance Computing (HPC) environment, the HPC Simulation Workflow Application Platform (HSWAP), which supports numerical simulation software encapsulation and interactive design of numerical simulation workflows, was developed. Firstly, a component model was built by modeling the runtime characteristics of numerical simulation activities. Then, the control and data dependencies between simulation activities were represented by a workflow, creating a formal numerical simulation workflow model. The resulting workflow model can be automatically parsed by the platform to adapt to HPC resources. Therefore, the HSWAP platform can automatically generate and schedule batches of related numerical simulation tasks, shielding domain users from the technical details of HPC resources. The platform provides Web portal services and supports pushing interactive interfaces of graphical numerical simulation programs. It has been deployed at a supercomputing center, where numerical simulation workflows integrating up to 10 simulation software packages and 20 computing task nodes can be built with 2 person-months of effort.
Time lag based temporal dependency episodes discovery
GU Peiyue, LIU Zheng, LI Yun, LI Tao
Journal of Computer Applications    2019, 39 (2): 421-428.   DOI: 10.11772/j.issn.1001-9081.2018061366
Concerning the problem that traditional frequent episode discovery usually relies on a predefined time window to mine simple association dependencies between events and cannot effectively handle interleaved temporal correlations, the concept of time-lag episode discovery was proposed, and, on the basis of frequent episode discovery, an Adjacent Event Matching set (AEM) based time-lag episode discovery algorithm was developed. Firstly, a probabilistic statistical model incorporating time lags was introduced to realize event sequence matching and handle possibly interleaved associations without a predefined time window. Then, the discovery of time lags was formulated as an optimization problem solved iteratively to obtain the time interval distribution between time-lag episodes. Finally, hypothesis testing was used to distinguish serial from parallel time-lag episodes. The experimental results show that, compared with the Iterative Closest Event (ICE) algorithm, the latest method for time-lag mining, the Kullback-Leibler (KL) divergence between the true and experimentally discovered distributions is 0.056 on average for AEM, a decrease of 20.68%. The AEM algorithm measures the possibility of multiple event matches through a probabilistic model of time lags and obtains a one-to-many adjacent event matching set, which simulates the actual situation more effectively than the one-to-one matching set of ICE.
Cross-social network user alignment algorithm based on knowledge graph embedding
TENG Lei, LI Yuan, LI Zhixing, HU Feng
Journal of Computer Applications    2019, 39 (11): 3198-3203.   DOI: 10.11772/j.issn.1001-9081.2019051143
Aiming at the poor network embedding performance of cross-social-network user alignment algorithms and the inability of negative sampling methods to guarantee the quality of generated negative samples, a cross-social-network KGEUA (Knowledge Graph Embedding User Alignment) algorithm was proposed. In the embedding stage, known anchor user pairs were used to expand the positive samples, and a Near_K negative sampling method was proposed to generate negative examples. The two social networks were then embedded into a unified low-dimensional vector space with a knowledge graph embedding method. In the alignment stage, the existing user similarity measurement was improved: the proposed structural similarity was combined with the traditional cosine similarity to jointly measure user similarity, and an adaptive threshold-based greedy matching method was proposed to align users. Finally, newly aligned user pairs were added to the training set to continuously optimize the vector space. The experimental results show that the proposed algorithm achieves a hits@30 of 67.7% on the Twitter-Foursquare dataset, 3.3 to 34.8 percentage points higher than state-of-the-art algorithms, effectively improving user alignment performance.
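
The alignment stage can be pictured with the sketch below, which greedily matches the highest-scoring user pairs under a threshold that relaxes each round. The embeddings, the structural similarity matrix, and the weighting parameter `alpha` are assumed inputs from the earlier stages, not the paper's exact formulation.

```python
# Sketch: combine cosine and structural similarity, then greedily match
# pairs above an adaptive (decaying) threshold.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def greedy_align(emb_a, emb_b, struct_sim, alpha=0.7,
                 thresh=0.9, decay=0.05, min_thresh=0.5):
    pairs = []
    free_a, free_b = set(range(len(emb_a))), set(range(len(emb_b)))
    while thresh >= min_thresh and free_a and free_b:
        scored = sorted(
            ((alpha * cosine(emb_a[i], emb_b[j])
              + (1 - alpha) * struct_sim[i, j], i, j)
             for i in free_a for j in free_b),
            reverse=True)
        for score, i, j in scored:
            if score < thresh:
                break
            if i in free_a and j in free_b:
                pairs.append((i, j))
                free_a.discard(i)
                free_b.discard(j)
        thresh -= decay                     # adaptive: relax each round
    return pairs

emb_a = np.random.rand(5, 16)               # toy user embeddings, net A
emb_b = np.random.rand(6, 16)               # toy user embeddings, net B
struct = np.random.rand(5, 6)               # toy structural similarity
print(greedy_align(emb_a, emb_b, struct))
```
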
Point-of-Interest recommendation algorithm combining location influence
XU Chao, MENG Fanrong, YUAN Guan, LI Yuee, LIU Xiao
Journal of Computer Applications    2019, 39 (11): 3178-3183.   DOI: 10.11772/j.issn.1001-9081.2019051087
Focusing on the low accuracy and efficiency of Point-Of-Interest (POI) recommendation, and based on a deep analysis of the social and geographical factors in POI recommendation, a POI recommendation algorithm combining location influence was presented. Firstly, to alleviate the sparseness of check-in data, 2-degree friends were introduced into the collaborative filtering algorithm to construct a social influence model, and the social influence of 2-degree friends on users was obtained by computing experience and friend similarity. Secondly, to account for the influence of geographical factors on POIs, a location influence model was constructed based on social network analysis: user influence was discovered through the PageRank algorithm, and location influence was calculated from POI check-in frequencies, yielding an overall geographical preference. Moreover, kernel density estimation was used to model users' check-in behaviors and obtain personalized geographical features. Finally, the social and geographical models were combined to improve recommendation accuracy, and recommendation efficiency was improved by constructing a candidate POI recommendation set. Experiments on the Gowalla and Yelp check-in datasets show that the proposed algorithm can quickly recommend POIs for users, with higher accuracy and recall than the Location Recommendation with Temporal effects (LRT) and iGSLR (Personalized Geo-Social Location Recommendation) algorithms.
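
As an illustration of the personalized geographical modeling step, the sketch below fits a 2-D kernel density estimate to one user's check-in coordinates and scores a candidate POI; the coordinates are toy values, and the social and influence components are omitted.

```python
# Sketch of the personalized geographical model: a 2-D KDE over a user's
# historical check-in coordinates, evaluated at a candidate POI.
import numpy as np
from scipy.stats import gaussian_kde

# Toy check-in history: (longitude, latitude) pairs for one user.
checkins = np.array([[116.30, 39.98], [116.32, 39.99],
                     [116.31, 39.97], [116.40, 39.90],
                     [116.41, 39.91]]).T          # shape (2, n) for KDE

kde = gaussian_kde(checkins)                      # bandwidth: Scott's rule

candidate_poi = np.array([[116.31], [39.98]])     # shape (2, 1)
geo_score = kde(candidate_poi)[0]                 # higher = more likely
print(f"geographical preference score: {geo_score:.3f}")
```
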
Image matching method with illumination robustness
WANG Yan, LYU Meng, MENG Xiangfu, LI Yuhao
Journal of Computer Applications    2019, 39 (1): 262-266.   DOI: 10.11772/j.issn.1001-9081.2018061210
Focusing on the low matching accuracy of current local-feature-based image matching algorithms under illumination change, an illumination-robust image matching algorithm was proposed. Firstly, a Real-Time Contrast Preserving decolorization (RTCP) algorithm was used to convert the image to grayscale, and a contrast stretching function was then used to simulate the effect of different illumination transformations on the image, so as to extract feature points robust to illumination change. Finally, a feature point descriptor was built using the local intensity order pattern, and point pairs were matched according to the Euclidean distance between the descriptors of the images to be matched. On open datasets, the proposed algorithm was compared with the Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), KAZE, and ORB (Oriented FAST and Rotated BRIEF) algorithms in matching speed and accuracy. The experimental results show that as the brightness difference between images increases, the matching accuracy of SIFT, SURF, KAZE, and ORB drops rapidly, while that of the proposed algorithm decreases slowly and stays above 80%. The proposed algorithm detects feature points more slowly and has a higher descriptor dimension, with an average time of 23.47 s; its matching speed is not as fast as the other four algorithms, but its matching quality is much better. The proposed algorithm can overcome the influence of illumination change on image matching.
Fault detection strategy based on local neighbor standardization and dynamic principal component analysis
ZHANG Cheng, GUO Qingxiu, FENG Liwei, LI Yuan
Journal of Computer Applications    2018, 38 (9): 2730-2734.   DOI: 10.11772/j.issn.1001-9081.2018010071
Aiming at processes with dynamic and multimode characteristics, a fault detection strategy based on Local Neighbor Standardization (LNS) and Dynamic Principal Component Analysis (DPCA) was proposed. First, the k nearest neighbors of each sample in the training data set were found, and the mean and standard deviation of each variable over this neighbor set were calculated. Next, these means and standard deviations were used to standardize the current sample. Finally, traditional DPCA was applied to the new data set to determine the control limits of the T² and SPE statistics for fault detection. LNS can eliminate the multimode characteristic of a process so that the new data set follows an approximately multivariate Gaussian distribution, while preserving the deviation of an outlier from the normal trajectory. LNS-DPCA can reduce the impact of the multimode structure and improve fault detectability in processes with dynamic properties. The efficiency of the proposed strategy was demonstrated on a simulated case and the penicillin fermentation process. The experimental results indicate that the proposed method outperforms Principal Component Analysis (PCA), DPCA, and Fault Detection based on k Nearest Neighbors (FD-kNN).
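
A minimal sketch of the LNS step is shown below, assuming Euclidean neighbors and a small epsilon to guard against zero variance; the subsequent DPCA modeling is not shown.

```python
# Sketch of Local Neighbor Standardization (LNS): standardize each sample
# with the mean/std of its k nearest neighbors in the training set.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lns(train, X, k=10):
    """Standardize rows of X using neighbor statistics from `train`."""
    nn = NearestNeighbors(n_neighbors=k).fit(train)
    _, idx = nn.kneighbors(X)                 # k nearest training samples
    out = np.empty_like(X, dtype=float)
    for i, neighbors in enumerate(idx):
        mu = train[neighbors].mean(axis=0)
        sigma = train[neighbors].std(axis=0) + 1e-12
        out[i] = (X[i] - mu) / sigma          # local z-score
    return out

# Two operating modes with different centers become one Gaussian-like set.
mode1 = np.random.randn(100, 4)
mode2 = np.random.randn(100, 4) * 2 + 10
train = np.vstack([mode1, mode2])
train_lns = lns(train, train)                 # feed to DPCA afterwards
```
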
Quality evaluation model of network operation and maintenance based on correlation analysis
WU Muyang, LIU Zheng, WANG Yang, LI Yun, LI Tao
Journal of Computer Applications    2018, 38 (9): 2535-2542.   DOI: 10.11772/j.issn.1001-9081.2018020412
Traditional network operation and maintenance evaluation methods have two problems. First, they depend too heavily on domain experts' experience for indicator selection and weight assignment, making accurate and comprehensive assessment difficult. Second, network operation and maintenance quality involves data from multiple manufacturers and devices in different formats and types, and a surge of users brings huge amounts of data. To solve these problems, an indicator selection method based on correlation was proposed, focusing on the indicator selection step of the evaluation process. By comparing the strength of correlation between the data series of indicators, the original indicators are classified into clusters, and the key indicators in each cluster are selected to construct a key indicator system. Data processing and weight determination methods requiring no human participation were also incorporated into the network operation and maintenance quality evaluation model. In the experiments, the indicators selected by the proposed method cover 72.2% of the manually selected indicators, with an information overlap rate 31% lower than that of the manual indicators. The proposed method can effectively reduce human involvement and achieves higher prediction accuracy for alarms.
Bluetooth location algorithm based on feature matching and distance weighting
LU Mingchi, WANG Shouhua, LI Yunke, JI Yuanfa, SUN Xiyan, DENG Guihui
Journal of Computer Applications    2018, 38 (8): 2359-2364.   DOI: 10.11772/j.issn.1001-9081.2018020295
Focusing on the large fluctuation of the Received Signal Strength Indication (RSSI), the complex clustering of the fingerprint database, and the large positioning error of traditional iBeacon fingerprinting, a new Bluetooth localization algorithm based on sorted feature matching and distance weighting was proposed. In the offline stage, the rank order of the RSSI vector was used to generate a sorting feature code, which, combined with the position coordinates, constituted the fingerprint information of the fingerprint database. In the online positioning stage, the RSSI was first smoothed by sliding-window weighting; indoor pedestrian positioning was then achieved by the sorted-feature-vector fingerprint matching algorithm and distance-based optimal Weighted K Nearest Neighbors (WKNN). In the localization simulation experiments, the feature codes were used for automatic clustering, reducing clustering complexity, with a maximum indoor pedestrian localization error of 0.952 m.
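
The online WKNN step might look like the sketch below; the inverse-distance weighting and the toy fingerprint values are illustrative assumptions.

```python
# Sketch of distance-weighted K nearest neighbors (WKNN): weight each
# fingerprint's coordinates by inverse RSSI-space distance.
import numpy as np

def wknn_locate(rssi, fp_rssi, fp_coords, k=4):
    """rssi: (m,) online vector; fp_rssi: (n, m); fp_coords: (n, 2)."""
    d = np.linalg.norm(fp_rssi - rssi, axis=1)      # RSSI-space distances
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)                   # closer = heavier
    w /= w.sum()
    return w @ fp_coords[nearest]                   # weighted position

fp_rssi = np.array([[-60, -70, -55], [-62, -68, -58],
                    [-75, -60, -65], [-80, -58, -70]], dtype=float)
fp_coords = np.array([[0.0, 0.0], [1.2, 0.0], [0.0, 1.8], [1.2, 1.8]])
print(wknn_locate(np.array([-61.0, -69.0, -56.0]), fp_rssi, fp_coords))
```
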
Batch process monitoring based on k nearest neighbors in discriminated kernel principal component space
ZHANG Cheng, GUO Qingxiu, LI Yuan
Journal of Computer Applications    2018, 38 (8): 2185-2191.   DOI: 10.11772/j.issn.1001-9081.2018020345
Aiming at the nonlinear and multimode features of batch processes, a fault detection method based on the k Nearest Neighbors (kNN) rule in a discriminated kernel principal component space, namely Dis-kPC kNN, was proposed. Firstly, in kernel Principal Component Analysis (kPCA), the kernel window width parameter was selected between the within-class and between-class widths according to discriminating category labels, so that the kernel matrix can effectively extract data correlation features while keeping accurate category information. Then, the kNN rule was used to replace the conventional T² statistic in the kernel principal component space, which can handle fault detection in processes with nonlinear and multimode features. Finally, the proposed method was validated on a numerical simulation and the semiconductor etching process. The experimental results show that the kNN rule in the discriminated kernel principal component space can effectively handle nonlinear and multimode conditions, improve computational efficiency, and reduce memory consumption; in addition, its fault detection rate is significantly better than that of the compared methods.
Fault detection for multistage process based on improved local neighborhood standardization and kNN
FENG Liwei, ZHANG Cheng, LI Yuan, XIE Yanhong
Journal of Computer Applications    2018, 38 (7): 2130-2135.   DOI: 10.11772/j.issn.1001-9081.2017112701
Concerning the problem that multistage process data has multiple centers and different structures across stages, a fault detection method based on Improved Local Neighborhood Standardization and k Nearest Neighbors (ILNS-kNN) was proposed. Firstly, the local neighbor set of the k nearest neighbors of each sample was found. Secondly, the sample was standardized with the mean and standard deviation of this local neighbor set to obtain a standard sample. Finally, fault detection was carried out by calculating the cumulative neighbor distance of samples in the standard sample set. ILNS shifts the center of each stage's data to the origin and adjusts the dispersion of each stage to be approximately the same, fusing the multistage process data into single-stage data obeying a multivariate Gaussian distribution. A fault detection experiment on the penicillin fermentation process was carried out. The experimental results show that the ILNS-kNN method achieves a detection rate above 97% for six types of faults. It can detect faults not only in general multistage processes, but also in multistage processes with significantly different variances, better ensuring the safety of multistage processes and high product quality.
Retinal vessel segmentation algorithm based on hybrid phase feature
LI Yuanyuan, CAI Yiheng, GAO Xurong
Journal of Computer Applications    2018, 38 (7): 2083-2088.   DOI: 10.11772/j.issn.1001-9081.2017123045
Focusing on the deficiency of the phase consistency feature in detecting vessel centers, a new retinal vessel segmentation algorithm based on a hybrid phase feature was proposed. Firstly, the original retinal image was preprocessed. Secondly, every pixel was represented by a 4-dimensional vector composed of Hessian matrix, Gabor transformation, Bar-selective Combination Of Shifted FIlter REsponses (B-COSFIRE), and phase features. Finally, a Support Vector Machine (SVM) was used for pixel classification to segment the retinal vessels. Among the four features, the phase feature is a new hybrid phase feature formed by fusing the phase consistency feature and the Hessian matrix feature through wavelet fusion. This new feature preserves the good vessel edge information of the phase consistency feature while compensating for its deficient detection of vessel centers. The average Accuracy (Acc) of the proposed algorithm on the Digital Retinal Images for Vessel Extraction (DRIVE) database is 0.9574, and the average Area Under the receiver operating characteristic Curve (AUC) is 0.9702. In the experiment on vessel extraction with a single feature through pixel classification, replacing the phase consistency feature with the hybrid phase feature improves the average Acc from 0.9191 to 0.9478 and the AUC from 0.9359 to 0.9702. The experimental results show that the hybrid phase feature is more suitable for pixel-classification-based retinal vessel segmentation than the phase consistency feature.
Local outlier factor fault detection method based on statistical pattern and local nearest neighborhood standardization
FENG Liwei, ZHANG Cheng, LI Yuan, XIE Yanhong
Journal of Computer Applications    2018, 38 (4): 965-970.   DOI: 10.11772/j.issn.1001-9081.2017092310
A Local Outlier Factor fault detection method based on Statistics Pattern and Local Nearest neighborhood Standardization (SP-LNS-LOF) was proposed to deal with unequal batch lengths, mean drift, and different batch structures in multimode process data. Firstly, the statistics pattern of each training sample was calculated; secondly, each statistics pattern was standardized into a standard sample using its set of local neighbor samples; finally, the local outlier factor of the standard sample was calculated and used as the detection index, with its quantile used as the detection control limit. When the local outlier factor of an online sample exceeds the control limit, the sample is identified as a fault; otherwise it is normal. The statistics pattern extracts the main information of the process and eliminates the impact of unequal batch lengths; local neighborhood standardization overcomes mean drift and differing batch structures; and the local outlier factor measures the similarity of samples and separates fault samples from normal ones. A simulation experiment on the semiconductor etching process was carried out. The experimental results show that SP-LNS-LOF detects all 21 faults and has a higher detection rate than the Principal Component Analysis (PCA), kernel PCA (kPCA), Fault Detection using the k Nearest Neighbor rule (FD-kNN), and Local Outlier Factor (LOF) methods. The theoretical analysis and simulation results show that SP-LNS-LOF is suitable for fault detection in multimode processes, with high detection efficiency, helping to ensure the safety of the production process.
Simplified Slope One algorithm for online rating prediction
SUN Limei, LI Yue, Ejike Ifeanyi Michael, CAO Keyan
Journal of Computer Applications    2018, 38 (2): 497-502.   DOI: 10.11772/j.issn.1001-9081.2017082493
In the era of big data, personalized recommendation systems are an effective means of information filtering, and data sparsity is one of the main factors affecting their prediction accuracy. The Slope One online rating prediction algorithm uses a simple linear regression model to cope with sparse data; it is easy to implement and fast at rating prediction, but its training stage must run offline because generating the rating differences between items has high time and space costs. To solve these problems, a simplified Slope One algorithm was proposed, which simplifies the most time-consuming step of Slope One, generating the item rating differences in the training stage, by using each item's historical average rating to obtain the difference. The simplified algorithm reduces the time and space complexity, effectively improves the utilization of the rating data, and adapts better to sparse data. In the experiments, the rating records in the MovieLens data set were ordered by timestamp and then divided into a training set and a test set. The experimental results show that the accuracy of the simplified Slope One algorithm closely approximates that of the original, while its time and space complexity are lower, making it more suitable for large-scale recommendation system applications with rapidly growing data.
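
A minimal sketch of the simplification follows: the item-item deviation is approximated by the difference of historical item averages, so training reduces to one running mean per item. The data triples are toy values.

```python
# Sketch: Slope One with dev(i, j) approximated by mean(i) - mean(j),
# replacing the O(m^2) item-item deviation matrix.
from collections import defaultdict

def item_means(ratings):
    """ratings: list of (user, item, score) triples."""
    total, count = defaultdict(float), defaultdict(int)
    for _, item, score in ratings:
        total[item] += score
        count[item] += 1
    return {i: total[i] / count[i] for i in total}

def predict(user_ratings, target, means):
    """Predict with dev(target, j) ~= mean(target) - mean(j)."""
    preds = [score + means[target] - means[j]
             for j, score in user_ratings.items() if j in means]
    return sum(preds) / len(preds) if preds else means.get(target)

ratings = [("u1", "a", 4), ("u1", "b", 3), ("u2", "a", 5),
           ("u2", "b", 4), ("u2", "c", 5), ("u3", "c", 2)]
means = item_means(ratings)
print(predict({"a": 4, "b": 3}, "c", means))   # predict u1's rating for c
```
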
k-core filtered influence maximization algorithms in social networks
LI Yuezhi, ZHU Yuanyuan, ZHONG Ming
Journal of Computer Applications    2018, 38 (2): 464-470.   DOI: 10.11772/j.issn.1001-9081.2017071820
Concerning the limited influence scope and high time complexity of existing influence maximization algorithms in social networks, a k-core filtered algorithm based on the independent cascade model was proposed. Firstly, an existing influence maximization algorithm whose node ranking does not depend on the entire network was introduced. Secondly, pre-training was carried out to find the value of k with the best optimization effect on the existing algorithm, a value independent of the number of selected seeds. Finally, the nodes and edges outside the k-core subgraph were filtered out by computing the k-core of the graph, and the existing influence maximization algorithm was applied to the k-core subgraph, reducing the computational complexity. Experiments on datasets of different scales show that the k-core filtered algorithm optimizes different influence maximization algorithms to different degrees. Combined with k-core filtering, the influence range of the Prefix excluding Maximum Influence Arborescence (PMIA) algorithm is increased by 13.89% and its execution time is reduced by up to 8.34%; the influence range of the Core Covering Algorithm (CCA) shows no obvious difference while its execution time is reduced by up to 28.5%; the influence range of the OutDegree algorithm is increased by 21.81% and its execution time is reduced by up to 26.96%; and the influence range of the Random algorithm is increased by 71.99% with execution time reduced by up to 24.21%. Furthermore, a new influence maximization algorithm named GIMS (General Influence Maximization in Social network) was proposed. Compared with PMIA and Influence Rank Influence Estimation (IRIE), it has a wider influence range while keeping execution time at the second level, and its influence range and execution time do not change significantly when combined with k-core filtering. The experimental results show that the k-core filtered algorithm can effectively increase the influence range of existing algorithms and reduce their execution times, and that GIMS has a wider influence range, better efficiency, and stronger robustness.
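
The filtering step itself is simple; the sketch below uses networkx's `k_core` and a top-degree heuristic as a stand-in for PMIA/CCA/OutDegree, which are not reimplemented here.

```python
# Sketch of the k-core filtering step: restrict any seed selection
# heuristic to the k-core subgraph.
import networkx as nx

def kcore_filtered_seeds(G, k, n_seeds, select):
    core = nx.k_core(G, k)              # drop nodes outside the k-core
    return select(core, n_seeds)

def top_degree(G, n_seeds):             # placeholder base algorithm
    return sorted(G.nodes, key=G.degree, reverse=True)[:n_seeds]

G = nx.barabasi_albert_graph(1000, 3, seed=42)
seeds = kcore_filtered_seeds(G, k=3, n_seeds=10, select=top_degree)
print(seeds)
```
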
Multi-modal process fault detection method based on improved partial least squares
LI Yuan, WU Haoyu, ZHANG Cheng, FENG Liwei
Journal of Computer Applications    2018, 38 (12): 3601-3606.   DOI: 10.11772/j.issn.1001-9081.2018051183
Partial Least Squares (PLS), a traditional data-driven method, performs poorly in fault detection on multimode data. To solve this problem, a new fault detection method called LNS-PLS, PLS based on Local Neighborhood Standardization (LNS), was proposed. Firstly, the original data was Gaussianized by the LNS method; on this basis, the PLS monitoring model was established and the control limits of the T² and Squared Prediction Error (SPE) statistics were determined. Secondly, the test data was standardized by the same LNS procedure, and its PLS monitoring indices were calculated for process monitoring and fault detection, solving PLS's inability to handle multimode data. The proposed method was applied to numerical examples and the penicillin production process, and its results were compared with those of Principal Component Analysis (PCA), k Nearest Neighbors (kNN), and PLS. The experimental results show that the proposed method is superior to PLS, kNN, and PCA in fault detection, with high accuracy in classification and multimode process fault detection.
Prediction of Parkinson’s disease based on multi-task regression of model filtering
LIU Feng, JI Wei, LI Yun
Journal of Computer Applications    2018, 38 (11): 3221-3224.   DOI: 10.11772/j.issn.1001-9081.2018041329
Traditional speech-based Parkinson's Disease (PD) prediction predicts the motor Unified Parkinson's Disease Rating Scale (motor-UPDRS) and the total Unified Parkinson's Disease Rating Scale (total-UPDRS) separately. To address its inability to use shared information between tasks and its poor prediction performance in single-task prediction, a multi-task regression method based on model filtering was proposed to predict the motor-UPDRS and total-UPDRS of PD patients. Firstly, considering that the speech features of the subtasks affect the predicted motor-UPDRS and total-UPDRS differently, an L1 regularization term was added for feature selection. Secondly, since Parkinson's patients are distributed across different domains, a filtering mechanism was added to improve prediction accuracy. In simulation experiments on a remote Parkinson dataset, the Mean Absolute Error (MAE) for motor-UPDRS improves by 67.2% over the Least Squares (LS) method; compared with the single-task Classification And Regression Tree (CART), the motor value improves by 64% and the total value by 78.4%. The experimental results show that multi-task regression based on model filtering is superior to single-task regression algorithms for UPDRS prediction.
Macroeconomic forecasting method fusing Weibo sentiment analysis and deep learning
ZHAO Junhao, LI Yuhua, HUO Lin, LI Ruixuan, GU Xiwu
Journal of Computer Applications    2018, 38 (11): 3057-3062.   DOI: 10.11772/j.issn.1001-9081.2018041346
The rapid development of the modern market economy comes with higher risks; forecasting regional investment can reveal investment risks in advance and provide references for the investment decisions of countries and enterprises. Aiming at the lag of statistical data and the complexity of internal relations in macroeconomic forecasting, a prediction method combining Long Short-Term Memory with Weibo Sentiment Analysis (SA-LSTM) was proposed. Firstly, given the strong timeliness of Weibo texts, a Weibo text crawling and sentiment analysis method was designed to obtain sentiment propensity scores. Then, total regional investment was forecast by combining structured economic indicators from government statistics with Long Short-Term Memory (LSTM) networks. Experimental results on four real datasets show that incorporating Weibo sentiment analysis reduces the relative prediction error by 4.95, 0.92, 1.21, and 0.66 percentage points, and that SA-LSTM reduces the relative prediction error by 0.06, 0.92, 0.94, and 0.66 percentage points compared with the best of four baselines: the AutoRegressive Integrated Moving Average model (ARIMA), Linear Regression (LR), Back Propagation Neural Network (BPNN), and LSTM. In addition, the variance of the relative prediction error is the smallest, indicating that the proposed method is robust and adapts well to data jitter.
Lip motion recognition of speaker based on SIFT
MA Xinjun, WU Chenchen, ZHONG Qianyuan, LI Yuanyuan
Journal of Computer Applications    2017, 37 (9): 2694-2699.   DOI: 10.11772/j.issn.1001-9081.2017.09.2694
Aiming at the problems that lip features have excessively high dimension and are sensitive to scale, a speaker authentication technique based on the Scale-Invariant Feature Transform (SIFT) algorithm was proposed. Firstly, a simple video frame alignment algorithm was proposed to normalize lip videos to the same length and extract representative lip motion pictures. Then, a new algorithm based on SIFT key points was proposed to extract texture and motion features, which, after Principal Component Analysis (PCA), yield typical lip motion features for authentication. Finally, a simple classification algorithm was presented based on the obtained features. The experimental results show that, compared with the common Local Binary Pattern (LBP) feature and the Histogram of Oriented Gradients (HOG) feature, the proposed feature extraction algorithm achieves a better False Acceptance Rate (FAR) and False Rejection Rate (FRR), which proves that the whole speaker lip motion recognition algorithm is effective and can obtain the desired results.
Excitation piecewise expansion method for speech bandwidth expansion based on hidden Markov model
GUO Leiyong, LI Yu, LIN Shengyi, TAN Hongzhou
Journal of Computer Applications    2017, 37 (8): 2416-2420.   DOI: 10.11772/j.issn.1001-9081.2017.08.2416
Speech bandwidth extension enhances auditory quality by artificially recovering the lost components in the high-band spectrum of narrow-band speech. Aiming at the excitation extension problem in the source-filter extension model of speech, a piecewise extension method was proposed: the upper spectrum of the narrow-band excitation source was used as the excitation for the lower part of the extension band, and white noise with energy equivalent to the narrow-band excitation frame was used for the upper part; the wideband excitation signal was then composed of these two components together with the original narrow-band excitation. Experimental results on wideband speech reconstruction with Hidden Markov Model (HMM) based spectrum envelope estimation show that the proposed method is superior to the spectrum-shift excitation extension method.
Online feature selection based on feature clustering ensemble technology
DU Zhenglin, LI Yun
Journal of Computer Applications    2017, 37 (3): 866-870.   DOI: 10.11772/j.issn.1001-9081.2017.03.866
For the new application scenario with both historical data and streaming features, an online feature selection algorithm combining group feature selection with streaming features was proposed. To compensate for the shortcomings of a single clustering algorithm, the idea of clustering ensemble was introduced into the group feature selection on historical data: a set of clusterings was first obtained by running k-means multiple times, and the final result was obtained in the integration stage with a hierarchical clustering algorithm. In the online feature selection phase on the streaming feature data, the feature groups derived from the group structure were updated by exploring the correlation among features, and the final feature subset was obtained by group transformation. The experimental results show that the proposed algorithm can effectively deal with online feature selection in the new scenario and has good classification performance.
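
The historical-data stage can be sketched as below: several k-means runs vote into a co-association matrix, which hierarchical clustering then cuts into final feature groups. Treating features as rows and the specific parameter values are assumptions for illustration.

```python
# Sketch of a clustering ensemble: multiple k-means runs -> co-association
# matrix -> hierarchical clustering cut into final feature groups.
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def ensemble_feature_groups(F, n_runs=10, k=3, n_groups=3):
    """F: (n_features, n_samples) matrix; returns a group id per feature."""
    n = F.shape[0]
    coassoc = np.zeros((n, n))
    for seed in range(n_runs):
        labels = KMeans(n_clusters=k, n_init=5,
                        random_state=seed).fit_predict(F)
        coassoc += (labels[:, None] == labels[None, :])
    dist = 1.0 - coassoc / n_runs              # co-association -> distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=n_groups, criterion="maxclust")

F = np.vstack([np.random.randn(5, 50) + c for c in (0, 5, 10)])
print(ensemble_feature_groups(F))              # e.g. [1 1 1 1 1 2 2 ...]
```
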
Modal parameter identification of vibration signal based on unsupervised learning convolutional neural network
FANG Ning, ZHOU Yu, YE Qingwei, LI Yugang
Journal of Computer Applications    2017, 37 (3): 786-790.   DOI: 10.11772/j.issn.1001-9081.2017.03.786
Aiming at the problems that most existing time-domain modal parameter identification methods require manual order selection and are not robust to noise, an unsupervised-learning Convolutional Neural Network (CNN) method for vibration signal modal identification was proposed. The algorithm modifies the standard CNN as follows. Firstly, the CNN for two-dimensional image processing was changed to a CNN for one-dimensional signals: the input layer became the set of vibration signals from which modal parameters are to be extracted; the intermediate layers became several one-dimensional convolution and subsampling layers; and the output became the set of N-order modal parameters corresponding to the signal. Then, for the error evaluation, the vibration signal was reconstructed from the network output (the N-order modal parameter set), and the squared sum of the differences between the reconstructed and input signals was taken as the learning error, making the network an unsupervised learning network and avoiding the order-selection problem of modal parameter extraction. The experimental results show that, when the constructed CNN is applied to modal parameter extraction under noise interference, its identification accuracy is higher than that of the Stochastic Subspace Identification (SSI) algorithm and the Local Linear Embedding (LLE) algorithm; it has strong noise resistance and avoids the order-selection problem.
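
A self-contained PyTorch sketch of the idea follows: the network maps a signal to N (frequency, damping, amplitude) triples, the signal is reconstructed as a sum of damped cosines, and the reconstruction MSE is the unsupervised loss. The architecture sizes and the parameterization are assumptions, not the paper's exact network.

```python
# Sketch: 1-D CNN with a modal-superposition reconstruction loss.
import math
import torch
import torch.nn as nn

N_MODES, LEN = 3, 512

class ModalCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, 9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Flatten(), nn.Linear(32 * (LEN // 16), 3 * N_MODES))

    def forward(self, x):                      # x: (batch, 1, LEN)
        p = self.net(x).view(-1, N_MODES, 3)
        freq = torch.sigmoid(p[..., 0]) * 0.5  # cycles/sample, < Nyquist
        damp = nn.functional.softplus(p[..., 1]) * 0.01
        amp = p[..., 2]
        return freq, damp, amp

def reconstruct(freq, damp, amp):
    t = torch.arange(LEN, dtype=torch.float32)
    # sum_i a_i * exp(-d_i t) * cos(2*pi*f_i t), broadcast over modes
    comp = amp[..., None] * torch.exp(-damp[..., None] * t) \
           * torch.cos(2 * math.pi * freq[..., None] * t)
    return comp.sum(dim=1, keepdim=True)       # (batch, 1, LEN)

model = ModalCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, LEN)                     # stand-in vibration batch
for _ in range(5):                             # a few illustrative steps
    freq, damp, amp = model(x)
    loss = nn.functional.mse_loss(reconstruct(freq, damp, amp), x)
    opt.zero_grad(); loss.backward(); opt.step()
```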